171 research outputs found

    Addendum to Informatics for Health 2017: Advancing both science and practice

    This article presents the presentation and poster abstracts that were mistakenly omitted from the original publication.

    Facilitating pre-operative assessment guidelines representation using SNOMED CT

    Objective: To investigate whether SNOMED CT covers the terms used in pre-operative assessment guidelines and, if necessary, how the measured content coverage can be improved. Methods: Pre-operative assessment guidelines were retrieved from the websites of (inter)national anesthesia-related societies. The recommendations in the guidelines were rewritten as "IF condition THEN action" statements to facilitate data extraction. Terms were extracted from the IF–THEN statements and mapped to SNOMED CT. Content coverage was measured using three scores: no match, partial match, and complete match. Non-covered concepts were evaluated against the SNOMED CT editorial documentation. Results: From 6 guidelines, 133 terms were extracted, of which 71% (n = 94) completely matched SNOMED CT concepts. Disregarding the vague concepts in the included guidelines, SNOMED CT's content coverage was 89%. Of the 39 not completely covered concepts, 69% violated at least one of SNOMED CT's editorial principles or rules. These concepts were grouped into four categories: non-reproducibility, classification-derived phrases, numeric ranges, and procedures categorized by complexity. Conclusion: Guidelines include vague terms that cannot be well supported by terminological systems, thereby hampering guideline-based decision support systems. This vagueness reduces the content coverage of SNOMED CT in representing concepts used in pre-operative assessment guidelines. Formalization of the guidelines using SNOMED CT is feasible, but to optimize this, the vagueness of some guideline concepts should first be resolved and a few currently missing but relevant concepts should be added to SNOMED CT.
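The three-level scoring described above (no match, partial match, complete match) can be sketched as follows. This is a minimal illustration, not the study's actual mapping procedure: the tiny word-overlap heuristic, the guideline terms, and the mini "terminology" are all invented stand-ins, where a real evaluation would query an actual SNOMED CT release.

```python
# Hypothetical sketch of three-level content-coverage scoring.
# Terms and the mini terminology below are invented for illustration.

def match_level(term, terminology):
    """Return 'complete', 'partial', or 'no' match for a guideline term."""
    words = set(term.lower().split())
    # Complete match: an identical concept exists in the terminology.
    for concept in terminology:
        if words == set(concept.lower().split()):
            return "complete"
    # Partial match: some word overlap with at least one concept.
    for concept in terminology:
        if words & set(concept.lower().split()):
            return "partial"
    return "no"

terminology = ["myocardial infarction", "diabetes mellitus", "aspirin"]
terms = ["myocardial infarction", "recent infarction", "frailty"]

for t in terms:
    print(t, "->", match_level(t, terminology))
# myocardial infarction -> complete
# recent infarction -> partial
# frailty -> no
```

Non-matching terms would then be reviewed against the editorial documentation, as the abstract describes.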

    The Use of SNOMED CT for Representing Concepts Used in Preoperative Guidelines

    The use of guidelines to improve quality of care depends on presenting them in a standard, machine-interpretable form and on using common terms both in the guidelines and in patient records. In this study, the use of SNOMED CT for representing concepts used in preoperative assessment guidelines was evaluated. Terms used in six of these guidelines were mapped to this terminology. Mappings were scored on three levels: no match, partial match, and complete match. As eleven of the terms were used repeatedly across guidelines, we analyzed the results in terms of "token" and "type" coverage. Of the 133 terms extracted from the guidelines, 107 should be covered by SNOMED CT, of which 87% were completely represented. Our study showed that SNOMED CT content should be extended before preoperative assessment guidelines can be completely automated.
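The token/type distinction above is the standard one from corpus analysis: type coverage counts each distinct term once, while token coverage counts every occurrence, so a frequently repeated covered term raises token coverage more than type coverage. A minimal sketch, with invented terms and an invented covered set:

```python
# Sketch of "type" vs "token" coverage. The term list and the set of
# covered terms are invented for illustration.

def coverage(tokens, covered):
    """Return (type_coverage, token_coverage) for a list of term tokens."""
    types = set(tokens)
    type_cov = sum(t in covered for t in types) / len(types)
    token_cov = sum(t in covered for t in tokens) / len(tokens)
    return type_cov, token_cov

tokens = ["asthma", "asthma", "asthma", "smoking", "rare finding"]
covered = {"asthma", "smoking"}

type_cov, token_cov = coverage(tokens, covered)
print(f"type coverage {type_cov:.2f}, token coverage {token_cov:.2f}")
# type coverage 0.67, token coverage 0.80
```

Here the repeated covered term "asthma" makes token coverage (4/5) higher than type coverage (2/3), mirroring why the study reports both figures.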

    Using Non-Primitive Concept Definitions for Improving DL-Based Knowledge Bases

    Medical terminological knowledge bases contain a large number of primitive concept definitions. This is due to the large number of natural kinds that are represented and to the limited expressiveness of the Description Logic used. These primitive definitions reduce the utility of classification, hindering the knowledge-modeling process. To better exploit classification, we devise a method in which definitions are assumed to be non-primitive during modeling. This method aims at detecting duplicate concept definitions, underspecification, and the actual limits of a DL-based representation. It provides the following advantages: duplicate definitions can be found, the limits of expressiveness of the logic can be delineated more clearly, and tacit knowledge is identified that can be expressed by defining additional concept properties. Two case studies demonstrate the feasibility of this approach.
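The duplicate-detection idea above can be illustrated in miniature: if definitions are treated as non-primitive (fully defined), two concepts whose stated definitions are identical become logically equivalent, which exposes them as duplicates. The toy concept names and property sets below are invented, and a real knowledge base would use a DL reasoner rather than structural equality:

```python
# Toy sketch: treating definitions as non-primitive makes concepts with
# identical definitions equivalent, surfacing duplicates. Names and
# property sets are invented for illustration.

definitions = {
    "Appendicitis":        frozenset({("is_a", "Inflammation"), ("site", "Appendix")}),
    "AppendixInflammation": frozenset({("is_a", "Inflammation"), ("site", "Appendix")}),
    "Gastritis":           frozenset({("is_a", "Inflammation"), ("site", "Stomach")}),
}

def find_duplicates(defs):
    """Group concept names that share an identical (non-primitive) definition."""
    by_def = {}
    for name, d in defs.items():
        by_def.setdefault(d, []).append(name)
    return [names for names in by_def.values() if len(names) > 1]

print(find_duplicates(definitions))
# [['Appendicitis', 'AppendixInflammation']]
```

Under primitive definitions these two concepts would sit side by side as unrelated siblings; the non-primitive assumption is what forces the modeler to either merge them or state the property that actually distinguishes them.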

    Equivalence of pathologists' and rule-based parser's annotations of Dutch pathology reports

    Introduction: In the Netherlands, pathology reports are annotated using the nationwide pathology network (PALGA) thesaurus. Annotations must address topography, procedure, and diagnosis. The Pathology Report Annotation Module (PRAM) can be used to annotate the report conclusion with PALGA-compliant code series. Whether these generated annotations are equivalent to manual annotations is unknown. We assess the equivalence of annotations by authoring pathologists, pathologists participating in this study, and PRAM. Methods: New annotations were created for one thousand histopathology reports by PRAM and by a pathologist panel. We calculated the dissimilarity of annotations using a semantic distance measure, Minimal Transition Cost (MTC). In the absence of a gold standard, we compared dissimilarity scores that shared one common annotator. The resulting comparisons yielded a measure of the coding dissimilarity between PRAM, the pathologist panel, and the authoring pathologist. To compare the comprehensiveness of the coding methods, we assessed the number and length of the annotations. Results: Eight of the twelve comparisons of dissimilarity scores were significantly equivalent. Non-equivalent score pairs involved dissimilarity between the code series by the original pathologist and those by the panel pathologists. Coding dissimilarity was lowest for procedures and highest for diagnoses: MTC overall = 0.30, topographies = 0.22, procedures = 0.13, diagnoses = 0.33. Both the number and the length of annotations per report increased with report conclusion length, most markedly in PRAM-annotated conclusions: conclusion length ranged from 2 to 373 words; the number of annotations ranged from 1 to 10 for pathologists and 1 to 19 for PRAM; annotation length ranged from 3 to 43 codes for pathologists and 4 to 123 for PRAM. Conclusions: We measured annotation similarity among PRAM, authoring pathologists, and panel pathologists. Annotations by PRAM and the panel pathologists, and to a lesser extent by the authoring pathologist, were equivalent. Therefore, the use of PRAM annotations in a practical setting is justified. PRAM annotations are equivalent to study-setting annotations and more comprehensive than routine coding. Further research on annotation quality is needed.
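The study design above, scoring pairwise dissimilarity between coders and comparing pairs that share a common annotator, can be sketched generically. The MTC measure itself is not reproduced here; as a loudly hedged stand-in, the sketch scores a pair of code series as 1 minus their Jaccard overlap, and every code series below is invented:

```python
# Hedged stand-in for comparing coders via a common annotator. The
# dissimilarity function is a simple Jaccard-based distance, NOT the
# study's Minimal Transition Cost; all code series are invented.

def dissimilarity(a, b):
    """Set-based stand-in for a semantic distance between code series."""
    a, b = set(a), set(b)
    return 1 - len(a & b) / len(a | b)

author = ["T67000", "P11400", "M40000"]            # authoring pathologist
panel  = ["T67000", "P11400", "M40010"]            # panel pathologist
pram   = ["T67000", "P11400", "M40000", "M09460"]  # PRAM (more comprehensive)

# Both scores share the authoring pathologist as the common annotator,
# so they can be compared without a gold standard.
print(round(dissimilarity(author, panel), 2))  # 0.5
print(round(dissimilarity(author, pram), 2))   # 0.25
```

In the invented example PRAM sits closer to the authoring pathologist than the panel does, and also emits more codes, echoing the abstract's finding that PRAM annotations are longer and more comprehensive.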

